Overview of the ImageCLEF 2006 Photographic Retrieval and Object Annotation Tasks
Abstract
This paper describes the general photographic retrieval and object annotation tasks of the ImageCLEF 2006 evaluation campaign. These tasks provided both the resources and the framework necessary to perform comparative laboratory-style evaluation of visual information systems for image retrieval and automatic image annotation. Both tasks offered something new for 2006 and attracted a large number of submissions: 12 groups participated in ImageCLEFphoto and 3 groups in the automatic annotation task. This paper summarises these two tasks, including the collections used in the benchmark, the tasks proposed, a summary of submissions from participating groups and the main findings.

1 The Photographic Retrieval Task: ImageCLEFphoto

The ImageCLEFphoto task provides the resources for the comparison of system performance in a laboratory-style setting. This kind of evaluation is system-centred and similar to the classic TREC (Text REtrieval Conference, http://trec.nist.gov/) [1] ad-hoc retrieval task: simulation of the situation in which a system knows the set of documents to be searched, but search topics are not known to the system in advance. Evaluation aims to compare algorithms and systems, and not to assess aspects of user interaction (iCLEF addresses this). The specific goal of ImageCLEFphoto is: given a statement describing a user information need, find as many relevant images as possible from the given document collection (with the query in a language different from that used to describe the images).

After three years of evaluation using the St. Andrews database [2], a new database was used in this year's task: the IAPR TC-12 Benchmark [3], created under Technical Committee 12 (TC-12) of the International Association of Pattern Recognition (IAPR, http://www.iapr.org/). This collection differs from the St Andrews collection used in previous campaigns in two major ways: (1) it contains mainly colour photographs (the St Andrews collection was primarily black and white) and (2) it contains semi-structured captions in English and German (the St Andrews collection used only English).
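To make the laboratory-style evaluation concrete: in a TREC-style ad-hoc setting, each submitted run is a ranked list of images per topic, which is scored against relevance judgements using measures such as (mean) average precision. The sketch below is a minimal, generic illustration of such scoring; the function names and data layout are illustrative assumptions and do not reproduce the campaign's actual evaluation tooling or file formats.

```python
# Minimal sketch of TREC-style ad-hoc scoring: average precision for one
# topic and mean average precision (MAP) over all topics. The data layout
# (dicts of ranked image IDs and sets of relevant IDs) is an assumption
# made for illustration only.

def average_precision(ranked_ids, relevant_ids):
    """Uninterpolated average precision for a single topic."""
    hits = 0
    precision_sum = 0.0
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant hit
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(run, qrels):
    """MAP over all topics; `run` maps topic -> ranked image IDs,
    `qrels` maps topic -> set of relevant image IDs."""
    scores = [average_precision(run.get(t, []), rel) for t, rel in qrels.items()]
    return sum(scores) / len(scores) if scores else 0.0

# Toy example with hypothetical image identifiers.
run = {"topic_01": ["img_12", "img_07", "img_33", "img_02"]}
qrels = {"topic_01": {"img_07", "img_02", "img_99"}}
print(mean_average_precision(run, qrels))  # 0.3333...
```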
1.1 Document Collection

The ImageCLEFphoto collection contains 20,000 photos taken from locations around the world, comprising a varying cross-section of still natural images on a variety of topics (Fig. 1 shows some examples). The majority of images have been provided by viventura (http://www.viventura.de), an independent travel company organising adventure and language trips to South America. Travel guides accompanying the tourists maintain a daily online diary including photographs of the trips made and general pictures of each location. For example, pictures include accommodation, facilities, people and social projects. The remainder of the images have been collected by the second author over the past few years from personal experiences (e.g. holidays and events). The collection is publicly available for research purposes and, unlike many existing photographic collections used to evaluate image retrieval systems, this collection is very general in content. The collection contains many different images of similar visual content, but varying illumination, viewing angle and background. This makes it a challenge for the successful application of techniques involving visual analysis.

Fig. 1. Sample images from the IAPR TC-12 collection

The content of the collection is varied (and realistic), and the associated descriptive annotations have been carefully created and applied in a systematic manner (e.g. all fields contain values of a similar style and format) to all images. Each image in the collection has a corresponding semi-structured caption consisting of the following seven fields (similar to the previous St Andrews collection): (1) a unique identifier, (2) a title, (3) a free-text description of the semantic and visual contents of the image, (4) notes for additional information, (5) the provider of the photo, and fields describing (6) where and (7) when the photo was taken. These fields exist in English and German, with a Spanish version currently being verified.

Although consistent and careful annotations are typically not found in practice, the goal of creating this resource was to provide a general-purpose collection which could be used for a variety of research purposes. For example, this year we decided to create a more realistic scenario for participants by releasing a version of the collection with a varying degree of annotation "completeness" (i.e. with different caption fields available for indexing and retrieval). For 2006, the collection contained the following levels of annotation: (a) 70% of the annotations contain title, description, notes, location and date; (b) 10% of the annotations contain title, location and date; (c) 10% of the annotations contain location and date; and (d) 10% of the images are not annotated (or have empty tags, respectively).
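To illustrate the caption structure and the annotation-completeness levels just described, the following sketch models a caption as a simple record with the seven fields and derives the reduced levels (b)-(d) by blanking fields. The record type, field names, level labels and masking function are hypothetical illustrations, not the collection's actual file format.

```python
# Illustrative sketch of an IAPR TC-12-style caption record and of the 2006
# annotation "completeness" levels (70% / 10% / 10% / 10%). Field names and
# the masking logic are assumptions; the identifier and provider fields are
# simply left untouched in this sketch.
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class Caption:
    identifier: str                 # (1) unique identifier
    title: Optional[str]            # (2) title
    description: Optional[str]      # (3) free-text description
    notes: Optional[str]            # (4) additional notes
    provider: Optional[str]         # (5) who provided the photo
    location: Optional[str]         # (6) where the photo was taken
    date: Optional[str]             # (7) when the photo was taken

# Fields retained at each hypothetical annotation level.
LEVELS = {
    "full":           {"title", "description", "notes", "location", "date"},
    "title_loc_date": {"title", "location", "date"},
    "loc_date":       {"location", "date"},
    "empty":          set(),
}

def mask_caption(caption: Caption, level: str) -> Caption:
    """Return a copy of the caption keeping only the fields of `level`."""
    keep = LEVELS[level]
    return replace(
        caption,
        **{f: None
           for f in ("title", "description", "notes", "location", "date")
           if f not in keep},
    )

# Example: a hypothetical English caption reduced to level (c).
c = Caption("annotations/00/1234.eng", "Church in Cuzco",
            "A white colonial church in front of a blue sky.",
            "Taken during a city tour.", "viventura", "Cuzco, Peru", "April 2002")
print(mask_caption(c, "loc_date"))
```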
Similar resources
CINDI at ImageCLEF 2006: Image Retrieval & Annotation Tasks for the General Photographic and Medical Image Collections
This paper presents our techniques and their analysis for the runs made and the results submitted by the CINDI group for the image retrieval and automatic annotation tasks of ImageCLEF 2006. For the ad-hoc image retrieval from both the photographic and medical image collections, we have experimented with cross-modal (image and text) interaction and integration approaches based on the...
The CLEF 2005 Cross-Language Image Retrieval Track
The purpose of this paper is to outline efforts from the 2005 CLEF cross-language image retrieval campaign (ImageCLEF). The aim of this CLEF track is to explore the use of both text and content-based retrieval methods for cross-language image retrieval. Four tasks were offered in the ImageCLEF track: ad-hoc retrieval from an historic photographic collection, ad-hoc retrieval from a medical col...
ImageCLEF 2013: The Vision, the Data and the Open Challenges
This paper presents an overview of the ImageCLEF 2013 lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the cross-language annotation and retrieval of images in various domains, from public and personal photo collections to medical images, data acquired by mobile robot platforms and botanic collections. Ov...
ImageCLEF 2014: Overview and Analysis of the Results
This paper presents an overview of the ImageCLEF 2014 evaluation lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the annotation and retrieval of images in various domains, from public and personal images to data acquired by mobile robot platforms and medical archives. Over the years, by providing new ...
MedIC/CISMeF at ImageCLEF 2006: Image Annotation and Retrieval Tasks
In the 2006 ImageCLEF cross-language image retrieval track, the MedIC/CISMeF group participated in the two medical-related tasks: the automatic annotation task and the multilingual image retrieval task. For the first task we submitted four runs based on supervised classification of combined texture and statistical image representations, the best result being the fourth rank at only 1% of the wi...